plausibility measure


Common $p$-Belief with Plausibility Measures: Extended Abstract

Pacuit, Eric, Yang, Leo

arXiv.org Artificial Intelligence

Aumann's famous Agreeing to Disagree Theorem states that if a group of agents share a common prior, update their beliefs by Bayesian conditioning based on private information, and have common knowledge of their posterior beliefs regarding some event, these posteriors must be identical. There is an elegant generalization of this theorem by Monderer and Samet, later refined by Neeman: if a group of agents share a common prior, update their beliefs using Bayesian conditioning on private information, and have common p-belief of their posteriors, these posteriors must be close (i.e., they cannot differ by more than 1 - p). Here, common p-belief generalizes the concept of common knowledge to probabilistic beliefs: agents commonly p-believe an event E if everyone believes E to at least degree p, everyone believes to at least degree p that everyone believes E to at least degree p, and so on. This paper further extends the Monderer-Samet-Neeman Agreement Theorem from classical probability measures to plausibility measures -- a very general framework introduced by Halpern that unifies many formal models of belief. To facilitate this extension, we provide a new proof of the Monderer-Samet-Neeman theorem in the classical setting. Building upon both the original proof and our new proof, we offer two different generalizations of the theorem to plausibility-based structures. We then apply these generalized results to several non-classical belief models, including conditional probability structures and lexicographic probability structures. Moreover, we show that whenever our generalized theorems do not apply, the Monderer-Samet-Neeman Agreement Theorem fails. These findings suggest that our results successfully identify the minimal conditions required for a belief model to satisfy the Monderer-Samet-Neeman Agreement Theorem.
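The Bayesian-conditioning step the abstract describes can be made concrete with a small sketch. This is not from the paper: the prior, partitions, and event below are our own toy example, illustrating how each agent conditions a common prior on the cell of her private-information partition containing the true state.

```python
from fractions import Fraction

# Toy illustration of updating a common prior by Bayesian conditioning on
# private information (hypothetical numbers, not taken from the paper).
prior = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}
partition_1 = [{0, 1}, {2, 3}]   # agent 1's private information
partition_2 = [{0, 2}, {1, 3}]   # agent 2's private information
event_E = {0, 3}

def posterior(prior, cell, event):
    """P(event | cell) by Bayesian conditioning of the common prior."""
    p_cell = sum(prior[w] for w in cell)
    return sum(prior[w] for w in cell & event) / p_cell

true_state = 0
cell_1 = next(c for c in partition_1 if true_state in c)
cell_2 = next(c for c in partition_2 if true_state in c)

q1 = posterior(prior, cell_1, event_E)  # agent 1's posterior for E
q2 = posterior(prior, cell_2, event_E)  # agent 2's posterior for E
print(q1, q2)  # the theorem bounds |q1 - q2| by 1 - p under common p-belief
```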


Inter Observer Variability Assessment through Ordered Weighted Belief Divergence Measure in MAGDM Application to the Ensemble Classifier Feature Fusion

Gupta, Pragya, Chakraborty, Debjani, Guha, Debashree

arXiv.org Artificial Intelligence

A large number of multi-attribute group decision-making (MAGDM) methods have been introduced to obtain consensus results. However, most of these methodologies ignore the conflict among the experts' opinions and only consider equal or variable priorities for them. Therefore, this study aims to propose an Evidential MAGDM method that assesses inter-observational variability and handles the uncertainty that emerges between the experts. The proposed framework makes four contributions. First, a basic probability assignment (BPA) generation method is introduced to capture the inherent characteristics of each alternative by computing the degree of belief. Second, ordered weighted belief and plausibility measures are constructed to capture the overall intrinsic information of each alternative by assessing inter-observational variability and addressing the conflicts that emerge within the group of experts. Third, an ordered weighted belief divergence measure is constructed to acquire the weighted support for each group of experts and obtain the final preference relationship. Finally, we present an illustrative example of the proposed Evidential MAGDM framework and analyze its application to real-world ensemble classifier feature fusion for diagnosing retinal disorders from optical coherence tomography images.


Basic concepts, definitions, and methods in D number theory

Deng, Xinyang

arXiv.org Artificial Intelligence

Although DST has many advantages in representing and dealing with uncertainty, it is limited by hypotheses and constraints that are hardly satisfied in some situations [3-6]. There are two main aspects. First, in DST a frame of discernment (FOD) must be composed of mutually exclusive elements, which is called the FOD's exclusiveness hypothesis. Second, in DST the sum of basic probabilities or beliefs m(.) in a basic probability assignment (BPA) must be 1 (equivalently, basic probabilities cannot be assigned to elements outside the FOD), which is called the BPA's completeness constraint. To overcome these limitations, a new generalization of DST, called D number theory (DNT), has recently been proposed [7, 8] for the fusion of uncertain information with non-exclusiveness and incompleteness. DNT stems from the concept of D numbers [9-16] and aims to build a more sophisticated framework, similar to DST, for representing and reasoning with uncertain information from a generic set-membership perspective; it relaxes the exclusiveness constraint on elements of the FOD and the completeness assumption on BPAs in DST.
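The completeness constraint the abstract contrasts can be shown in a few lines. The frame and mass values below are our own toy example: a classical BPA whose masses sum to 1, next to a D-number-style assignment whose masses sum to less than 1, which DNT permits.

```python
# Sketch contrasting a classical BPA with a D-number-style assignment
# (hypothetical masses, not taken from the paper). In DST the masses over
# subsets of the FOD must sum to 1; DNT drops that completeness constraint.
bpa = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}        # sums to 1.0
d_number = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.3}   # sums to 0.8

def is_complete(m, tol=1e-9):
    """DST's completeness constraint: all mass is assigned within the FOD."""
    return abs(sum(m.values()) - 1.0) < tol

print(is_complete(bpa), is_complete(d_number))  # True False
```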


Belief and plausibility measures for D numbers

Deng, Xinyang

arXiv.org Artificial Intelligence

As a generalization of Dempster-Shafer theory, D number theory provides a framework to deal with uncertain information with non-exclusiveness and incompleteness. However, some basic concepts in D number theory are not well defined. In this note, belief and plausibility measures for D numbers are proposed, and basic properties of these measures are revealed as well.

Keywords: Belief measure, Plausibility measure, D numbers, Dempster-Shafer theory

1. Introduction. Dempster-Shafer evidence theory (DST) [1, 2] is one of the most popular theories for dealing with uncertain information, and has been widely used in various fields [3-5]. But it is limited by hypotheses and constraints that are hardly satisfied in some situations [6-9]. There are two main aspects.
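For context, the classical DST belief and plausibility measures that this note adapts to D numbers can be sketched directly from a mass function: Bel(A) sums the mass of focal sets contained in A, and Pl(A) sums the mass of focal sets intersecting A. The frame and masses below are our own toy example, not from the note.

```python
# Classical DST belief and plausibility from a mass function (BPA).
# Hypothetical frame {'a','b','c'} and masses; our own example.
mass = {
    frozenset({'a'}): 0.4,
    frozenset({'b'}): 0.1,
    frozenset({'a', 'b'}): 0.3,
    frozenset({'a', 'b', 'c'}): 0.2,
}

def bel(A):
    """Bel(A) = sum of m(B) over all focal sets B contained in A."""
    return sum(m for B, m in mass.items() if B <= A)

def pl(A):
    """Pl(A) = sum of m(B) over all focal sets B intersecting A."""
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({'a', 'b'})
print(bel(A), pl(A))  # Bel(A) <= Pl(A) always holds
```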


Conditional Plausibility Measures and Bayesian Networks

Halpern, Joseph Y.

arXiv.org Artificial Intelligence

A general notion of algebraic conditional plausibility measures is defined. Probability measures, ranking functions, possibility measures, and (under the appropriate definitions) sets of probability measures can all be viewed as defining algebraic conditional plausibility measures. It is shown that the technology of Bayesian networks can be applied to algebraic conditional plausibility measures.
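One of the instances the abstract mentions, ranking functions, makes the "algebraic" idea tangible: plausibility values live in the (min, +) semiring, so the Bayesian-network chain rule uses + where probability uses multiplication, and marginalization uses min where probability sums. The two-variable network X -> Y and all rank values below are our own toy example, not from the paper.

```python
# Ranking functions (Spohn) as one instance of an algebraic conditional
# plausibility measure: kappa(x, y) = kappa(x) + kappa(y | x), with min
# playing the role of summation. All numbers are hypothetical.
kappa_X = {'x0': 0, 'x1': 2}                 # kappa(X): rank 0 = unsurprising
kappa_Y_given_X = {                          # kappa(Y | X)
    ('y0', 'x0'): 0, ('y1', 'x0'): 1,
    ('y0', 'x1'): 3, ('y1', 'x1'): 0,
}

def kappa_joint(x, y):
    """Chain rule in the (min, +) semiring: + replaces multiplication."""
    return kappa_X[x] + kappa_Y_given_X[(y, x)]

def kappa_Y(y):
    """Marginal rank: the semiring 'sum' is min over the eliminated variable."""
    return min(kappa_joint(x, y) for x in kappa_X)

print(kappa_Y('y0'), kappa_Y('y1'))  # 0 1
```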


Compact Representations of Extended Causal Models

Halpern, Joseph Y., Hitchcock, Christopher

arXiv.org Artificial Intelligence

One of Judea Pearl's many, many important contributions to the study of causality was the first attempt to use the mathematical tools of causal modeling to give an account of "actual causation", a notion that has been of considerable interest among philosophers and legal theorists (Pearl, 2000, Chapter 10). Pearl later revised his account of actual causation in joint work with Halpern (Halpern & Pearl, 2005). A number of authors (Hall, 2007; Halpern, 2008; Hitchcock, 2007; Menzies, 2004) have suggested that an account of actual causation must be sensitive to considerations of normality, as well as to causal structure. In (Halpern & Hitchcock, 2011), we suggest a way of incorporating considerations of normality into the Halpern-Pearl theory, and show how to extend the account to illuminate features of the psychology of causal judgment, as well as features of causal reasoning in the law. Our account of actual causation makes use of "extended causal models", which include both structural equations among a set of variables, and a partial preorder on possible worlds, which represents the relative "normality" of those worlds. We actually want to think of people as working with the structural equations and normality order to evaluate actual causation. However, consideration of even simple examples immediately suggests a problem. A direct representation of the equations and normality order is too cumbersome for cognitively limited agents to use effectively. If our account of actual causation is to be at all realistic as a model of human causal judgment, some form of compact representation will be needed.


A Qualitative Markov Assumption and its Implications for Belief Change

Friedman, Nir, Halpern, Joseph Y.

arXiv.org Artificial Intelligence

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative setting. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model.


Axiomatic Foundations for a Class of Generalized Expected Utility: Algebraic Expected Utility

Weng, Paul

arXiv.org Artificial Intelligence

In this paper, we provide two axiomatizations of algebraic expected utility, a particular generalized expected utility, in a von Neumann-Morgenstern setting: the uncertainty representation is assumed to be given and is described here by a plausibility measure valued on a semiring, which may be partially ordered. We show that axioms identical to those for expected utility entail that preferences are represented by an algebraic expected utility. This algebraic approach allows many previous proposals (expected utility, binary possibilistic utility, ...) to be unified in the same general framework, and proves that the obtained utility enjoys the same nice features as expected utility: linearity, dynamic consistency, autoduality of the underlying uncertainty measure, autoduality of the decision criterion, and the possibility of modeling the decision maker's attitude toward uncertainty.
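The unification the abstract describes amounts to replacing (sum, product) in EU = sum_x p(x) * u(x) with an abstract pair of semiring operations. The sketch below is our own illustration, not the paper's formalism: one generic criterion recovers classical expected utility with (+, *) and a qualitative max-min possibilistic utility with (max, min); the lottery and utility values are hypothetical.

```python
from functools import reduce

def algebraic_eu(plaus, util, oplus, otimes):
    """Generic criterion: oplus over outcomes of otimes(plaus(x), util(x))."""
    return reduce(oplus, (otimes(plaus[x], util[x]) for x in plaus))

outcomes_prob = {'win': 0.3, 'lose': 0.7}     # probability measure
outcomes_poss = {'win': 0.3, 'lose': 1.0}     # possibility measure (max is 1)
util = {'win': 1.0, 'lose': 0.5}

# Same criterion, two semirings: (+, *) gives expected utility,
# (max, min) gives a qualitative possibilistic utility.
eu = algebraic_eu(outcomes_prob, util, lambda a, b: a + b, lambda a, b: a * b)
pu = algebraic_eu(outcomes_poss, util, max, min)
print(eu, pu)  # 0.65 0.5
```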


Dynamic consistency and decision making under vacuous belief

Giang, Phan H.

arXiv.org Artificial Intelligence

The ideas about decision making under ignorance in economics are combined with the ideas about uncertainty representation in computer science. The combination sheds new light on the question of how artificial agents can act in a dynamically consistent manner. The notion of sequential consistency is formalized by adapting the law of iterated expectation for plausibility measures. The necessary and sufficient condition for a certainty equivalence operator for Nehring-Puppe's preference to be sequentially consistent is given.